hysop.operator.hdf_io module¶
I/O operators:

HDF_Writer
: operator to write fields into an hdf file

HDF_Reader
: operator to read fields from an hdf file

HDF_IO
: abstract interface for hdf io classes
- class hysop.operator.hdf_io.HDF_IO(var_names=None, name_prefix='', name_postfix='', force_backend=None, **kwds)[source]¶
Bases:
HostOperatorBase
Abstract interface to read/write from/to hdf files, for hysop fields.
Read/write some fields data from/into hdf/xdmf files. Supports parallel I/O.
- Parameters:
var_names (dict, optional) – keys = Field, values = str (field name). See notes below.
name_prefix (str, optional) – Optional name prefix for variables.
name_postfix (str, optional) – Optional name postfix for variables.
force_backend (hysop.constants.Backend) – Force the source backend for fields.
kwds (dict) – Base class arguments.
Notes
Datasets in hdf files are identified by name. In hysop, when writing a file, the default dataset name is ‘continuous_field.name + topo.id + component direction’, but this can be changed via the var_names argument. For example, if input_fields=[velo, vorti] and the hdf file contains the keys ‘vel_1_X, vel_1_Y, vel_1_Z, dat_2_X, dat_2_Y, dat_2_Z’, then use var_names = {velo: ‘vel’, vorti: ‘dat’} to read vel/dat into velo/vorti.
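The naming scheme above can be sketched in plain Python. The helper below is purely illustrative and hypothetical (it is not part of the hysop API); it only shows how a var_names mapping overrides the default base name when building dataset keys:

```python
# Hypothetical sketch (not hysop API) of the dataset naming described above.
def dataset_name(field, topo_id, component, var_names=None):
    """Build an hdf dataset key of the form '<name>_<topo.id>_<component>'.

    var_names maps a field to an alternative base name, as in the
    var_names = {velo: 'vel', vorti: 'dat'} example above.
    """
    base = var_names.get(field, field) if var_names else field
    return f"{base}_{topo_id}_{component}"

# One key per component of a 3D velocity field on topology 1:
keys = [dataset_name("velocity", 1, c, var_names={"velocity": "vel"})
        for c in ("X", "Y", "Z")]
# keys == ['vel_1_X', 'vel_1_Y', 'vel_1_Z']
```

With no var_names entry, the field's own name is used as the base, matching the default behaviour described in the notes.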
- create_topology_descriptors()[source]¶
- Called in get_field_requirements, just after handle_method.
- Topology requirements (or descriptors) are:
min and max ghosts for each input and output variable
allowed splitting directions for cartesian topologies
- discretize()[source]¶
By default, an operator discretizes all its variables. For each input continuous field that is also an output field, the input topology may differ from the output topology.
After this call, one can access self.input_discrete_fields and self.output_discrete_fields, which contain the input and output discretized fields mapped by continuous field.
self.discrete_fields will be a tuple containing all input and output discrete fields.
Discrete tensor fields are built back from the discretized scalar fields and are accessible from self.input_tensor_fields, self.output_tensor_fields and self.discrete_tensor_fields, like their scalar counterparts.
- get_field_requirements()[source]¶
Called just after handle_method(), i.e. after self.method has been set. Field requirements are:
required local and global transposition state, if any
required memory ordering (either C or Fortran)
The default is Backend.HOST, no min or max ghosts, MemoryOrdering.ANY, and no specific default transposition state for each input and output variable.
- get_node_requirements()[source]¶
Called after get_field_requirements to get global operator requirements.
- By default we enforce a unique:
transposition state
cartesian topology shape
memory order (either C or Fortran)
across all fields.
- classmethod supported_backends()[source]¶
Return the backends that this operator’s topologies can support.
- class hysop.operator.hdf_io.HDF_Reader(variables, restart=None, name=None, **kwds)[source]¶
Bases:
HDF_IO
Parallel reading of hdf/xdmf files to fill some fields in.
Read some fields data from hdf/xdmf files. Supports parallel reads.
- Parameters:
restart (int, optional) – number of a specific iteration to be read; default=None, i.e. read the first iteration.
kwds – base class arguments.
Notes
restart corresponds to the number that appears in the name of the hdf file, i.e. the number of the iteration at which writing occurred. See examples in tests_hdf_io.py.
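The restart semantics described above (default=None reads the first iteration) can be sketched with a small helper. pick_iteration is a hypothetical illustration, not hysop API:

```python
def pick_iteration(available, restart=None):
    """Select which iteration to read: the first one when restart is None,
    otherwise the requested iteration number (which must exist)."""
    if restart is None:
        return available[0]
    if restart not in available:
        raise ValueError(f"iteration {restart} not found")
    return restart

# Iteration numbers appearing in the hdf file names:
iterations = [0, 10, 20, 30]
first = pick_iteration(iterations)        # default: first iteration -> 0
chosen = pick_iteration(iterations, 20)   # specific restart -> 20
```

Passing a restart number that was never written should fail loudly rather than silently read the wrong data, which is the behaviour sketched here.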
- apply(**kwds)¶
Abstract method that should be implemented. Applies this node (operator, computational graph operator…).
- class hysop.operator.hdf_io.HDF_Writer(variables, name=None, pretty_name=None, **kwds)[source]¶
Bases:
HDF_IO
Print field values on a given topology, in HDF5 format.
Write some fields data into hdf/xdmf files. Supports parallel writes.
- Parameters:
kwds – base class arguments.
- apply(**kwds)¶
Abstract method that should be implemented. Applies this node (operator, computational graph operator…).
- finalize()[source]¶
Cleanup this node (free memory from external solvers, …). By default, this does nothing.
- get_work_properties(**kwds)[source]¶
Returns extra memory requirements of this operator. This allows operators to request for temporary buffers that will be shared between operators in a graph to reduce the memory footprint and the number of allocations.
Returned memory is only usable during operator call (ie. in self.apply). Temporary buffers may be shared between different operators as determined by the graph builder.
By default, if there are no input or output temporary fields, this returns no requests, meaning that this node requires no extra buffers.
If temporary fields are present, their memory requests are automatically computed and returned.
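The buffer-sharing idea can be illustrated with a toy pool: each operator requests a temporary buffer size, and since buffers are only live during that operator's apply(), a single shared buffer sized to the largest request can serve all of them in turn. This is a conceptual sketch, not hysop's actual work-properties machinery:

```python
def shared_pool_size(requests):
    """Size of one shared temporary buffer that can serve every operator
    in turn, since each buffer is only live during its operator's apply()."""
    return max(requests.values(), default=0)

# Per-operator temporary buffer requests, in bytes (illustrative numbers):
requests = {"hdf_writer": 4096, "advection": 16384, "diffusion": 8192}
pool = shared_pool_size(requests)   # 16384 shared, vs 28672 if unshared
```

Sharing is safe precisely because the returned memory is only usable inside self.apply(), as stated above.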
- setup(work, **kwds)[source]¶
Setup temporary buffers that have been requested in get_work_properties(). This function may be used to execute post-allocation routines. It sets the self.ready flag to True. Once this flag is set, one may call ComputationalGraphNode.apply() and ComputationalGraphNode.finalize().
Automatically honour temporary field memory requests.